robotic eye
Can Robotic Cues Manipulate Human Decisions? Exploring Consensus Building via Bias-Controlled Non-linear Opinion Dynamics and Robotic Eye Gaze Mediated Interaction in Human-Robot Teaming
Kumar, Rajul; Bhatti, Adam; Yao, Ningshi
Although robots are becoming more advanced with human-like anthropomorphic features and decision-making abilities to improve collaboration, the active integration of humans into this process remains under-explored. This article presents the first experimental study exploring decision-making interactions between humans and robots with visual cues from robotic eyes, which can dynamically influence human opinion formation. The cues generated by robotic eyes gradually guide human decisions towards alignment with the robot's choices. Both human and robot decision-making processes are modeled as non-linear opinion dynamics with evolving biases. To examine these opinion dynamics under varying biases, we conduct numerical parametric and equilibrium continuation analyses using tuned parameters designed explicitly for the presented human-robot interaction experiment. Furthermore, to facilitate the transition from disagreement to agreement, we introduce a human opinion observation algorithm integrated with the formation of the robot's opinion, where the robot's behavior is controlled based on its formed opinion. The algorithms developed aim to enhance human involvement in consensus building, fostering effective collaboration between humans and robots. Experiments with 51 participants (N = 51) show that human-robot teamwork can be improved by guiding human decisions using robotic cues. Finally, we provide detailed insights into the effects of trust, cognitive load, and participant demographics on decision-making based on user feedback and post-experiment interviews.
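The bias-controlled non-linear opinion dynamics described above can be sketched as a minimal two-agent simulation. The saturating-coupling form and all parameter values below are illustrative assumptions, not the tuned parameters used in the experiment:

```python
import math

def simulate(steps=2000, dt=0.01, d=1.0, u=1.2, alpha=0.2, gamma=0.8,
             z_h=-0.3, z_r=0.5, b_r=0.5):
    """Euler-integrate dz_i/dt = -d*z_i + u*tanh(alpha*z_i + gamma*z_j) + b_i
    for a human (h) and a robot (r). The robot's bias b_r steers the pair
    from initial disagreement toward a shared-sign (consensus) equilibrium.
    """
    for _ in range(steps):
        dz_h = -d * z_h + u * math.tanh(alpha * z_h + gamma * z_r)        # human: unbiased
        dz_r = -d * z_r + u * math.tanh(alpha * z_r + gamma * z_h) + b_r  # robot: biased cue
        z_h += dt * dz_h
        z_r += dt * dz_r
    return z_h, z_r
```

Starting from opposing opinions (z_h < 0 < z_r), the bias term pulls both agents to the positive equilibrium, i.e. the human's decision drifts into alignment with the robot's choice.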
An Efficient Learning Control Framework With Sim-to-Real for String-Type Artificial Muscle-Driven Robotic Systems
Tao, Jiyue; Zhang, Yunsong; Rajendran, Sunil Kumar; Zhang, Feitian; Zhao, Dexin; Shen, Tongsheng
Robotic systems driven by artificial muscles present unique challenges due to the nonlinear dynamics of actuators and the complex designs of mechanical structures. Traditional model-based controllers often struggle to achieve desired control performance in such systems. Deep reinforcement learning (DRL), a trending machine learning technique widely adopted in robot control, offers a promising alternative. However, integrating DRL into these robotic systems faces significant challenges, including the requirement for large amounts of training data and the inevitable sim-to-real gap when deployed to real-world robots. This paper proposes an efficient reinforcement learning control framework with sim-to-real transfer to address these challenges. Bootstrap and augmentation enhancements are designed to improve the data efficiency of baseline DRL algorithms, while a sim-to-real transfer technique, namely randomization of muscle dynamics, is adopted to bridge the gap between simulation and real-world deployment. Extensive experiments and ablation studies are conducted utilizing two string-type artificial muscle-driven robotic systems, including a two-degree-of-freedom robotic eye and a parallel robotic wrist, the results of which demonstrate the effectiveness of the proposed learning control strategy.
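The sim-to-real technique named in the abstract, randomization of muscle dynamics, amounts to resampling the simulated actuator parameters each training episode so the learned policy is robust to model mismatch. The parameter names and ranges below are hypothetical stand-ins, not the paper's actual muscle model:

```python
import random

def randomized_muscle_params(nominal, spread=0.2, rng=random):
    """Sample per-episode muscle parameters uniformly within +/- spread
    of their nominal values. Training a DRL policy across this
    distribution helps it transfer across the sim-to-real gap."""
    return {name: value * rng.uniform(1.0 - spread, 1.0 + spread)
            for name, value in nominal.items()}

# Hypothetical nominal muscle model for one string-type actuator.
nominal = {"stiffness": 120.0, "damping": 0.8, "contraction_gain": 1.5}
episode_params = randomized_muscle_params(nominal)
```

At the start of each simulated episode, the environment would be rebuilt with a fresh `episode_params` draw before the policy rollout begins.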
Design and Evaluation of a Bioinspired Tendon-Driven 3D-Printed Robotic Eye with Active Vision Capabilities
Osooli, Hamid; Rahaghi, Mohsen Irani; Ahmadzadeh, S. Reza
The field of robotics has seen significant advancements in recent years, particularly in the development of humanoid robots. One area of research that has yet to be fully explored is the design of robotic eyes. In this paper, we propose a computer-aided 3D design scheme for a robotic eye that incorporates realistic appearance, natural movements, and efficient actuation. The proposed design utilizes a tendon-driven actuation mechanism, which offers a broad range of motion capabilities. The use of the minimum number of servos for actuation, one for each agonist-antagonist pair of muscles, makes the proposed design highly efficient. Compared to existing designs in the same class, our robotic eye offers aesthetic and realistic features. We evaluate the robot's performance using a vision-based controller, which demonstrates the effectiveness of the proposed design in achieving natural movement and efficient actuation. The experiment code, toolbox, and printable 3D sketches of our design have been open-sourced.
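As an illustration of how such a vision-based controller can drive a single servo per agonist-antagonist tendon pair, here is a minimal proportional sketch; the gain, limits, and pixel coordinates are assumed values, not from the paper:

```python
def gaze_servo_command(target_px, center_px, k_p=0.05, limit=30.0):
    """Map the pixel error between a detected visual target and the image
    center to one servo angle (degrees). A positive angle winds the agonist
    tendon while paying out the antagonist by the same amount, so a single
    servo actuates the whole muscle pair."""
    error = target_px - center_px
    # Proportional control, clamped to the servo's mechanical range.
    return max(-limit, min(limit, k_p * error))
```

Run per axis (pan and tilt) on each camera frame, this drives the eye until the target sits at the image center, i.e. on the visual axis.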
Japanese engineers put a pair of googly eyes on a self-driving car
A comic pair of googly eyes on the front of a self-driving car could reduce traffic accidents, a new study suggests. Researchers in Japan fitted a golf cart with two large, remote-controlled robotic eyes, making it look like the beloved children's TV character 'Brum'. In experiments in virtual reality (VR), they found pedestrians were able to make 'safer or more efficient choices' when the eyes were fitted than when they weren't. According to the researchers, pedestrians generally like to look at vehicle drivers to know that they've registered their presence. But in a future where self-driving cars are commonplace, pedestrians won't be able to do this as the driver's seat will be empty.
Can eyes on self-driving cars reduce accidents?
Robotic eyes on autonomous vehicles could improve pedestrian safety, according to a new study at the University of Tokyo. Participants played out scenarios in virtual reality (VR) and had to decide whether to cross a road in front of a moving vehicle or not. When that vehicle was fitted with robotic eyes, which either looked at the pedestrian (registering their presence) or away (not registering them), the participants were able to make safer or more efficient choices. Self-driving vehicles seem to be just around the corner. Whether they'll be delivering packages, plowing fields or busing kids to school, a lot of research is underway to turn a once futuristic idea into reality. While the main concern for many is the practical side of creating vehicles that can autonomously navigate the world, researchers at the University of Tokyo have turned their attention to a more "human" concern of self-driving technology.
Robotic eyes on self-driving vehicles could reduce road accidents, says research
To ensure pedestrian safety around self-driving vehicles, a new study proposes that installing robotic eyes on autonomous vehicles influences pedestrians' decision-making, improving their safety. In research carried out by the University of Tokyo, participants were placed in a virtual reality (VR) environment where they had to choose whether to cross a road in front of a moving vehicle. The results showed that participants could make safer or more efficient decisions when the vehicle was equipped with robotic eyes that either looked at the pedestrian (registering their presence) or away (not registering their presence). The major difference with self-driving vehicles, the researchers say, is that the driver may become more of a passenger. They may not be paying full attention to the road, or there may be nobody at the wheel at all.
Study finds robotic eyes on autonomous vehicles could reduce road accidents
Tokyo (Japan), September 20 (ANI): According to a new study from the University of Tokyo, robotic eyes on autonomous vehicles could improve pedestrian safety. Participants acted out scenarios in virtual reality (VR), deciding whether or not to cross a road in front of a moving vehicle. Participants were able to make safer or more efficient choices when that vehicle was outfitted with robotic eyes that either looked at the pedestrian (registering their presence) or away (not registering their presence). Self-driving vehicles seem to be just around the corner. Whether they'll be delivering packages, ploughing fields or busing kids to school, a lot of research is underway to turn a once futuristic idea into reality.
Emergence of human oculomotor behavior from optimal control of a cable-driven biomimetic robotic eye
Alitappeh, Reza Javanmard; John, Akhil; Dias, Bernardo; van Opstal, A. John; Bernardino, Alexandre
In human-robot interactions, eye movements play an important role in non-verbal communication. However, controlling the motions of a robotic eye so that it displays performance similar to the human oculomotor system is still a major challenge. In this paper, we study how to control a realistic model of the human eye with a cable-driven actuation system that mimics the six degrees of freedom of the extra-ocular muscles. The biomimetic design introduces novel challenges to address, most notably the need to control the pretension on each individual muscle to prevent the loss of tension during motion, which would lead to cable slack and loss of control. We built a robotic prototype and developed a nonlinear simulator and two controllers. In the first approach, we linearized the nonlinear model, using a local derivative technique, and designed linear-quadratic optimal controllers to optimize a cost function that accounts for accuracy, energy expenditure, and movement duration. The second method uses a recurrent neural network that learns the nonlinear system dynamics from sample trajectories of the system, and a non-linear trajectory optimization solver that minimizes a similar cost function. We focused on the generation of rapid saccadic eye movements with fully unconstrained kinematics, and the generation of control signals for the six cables that simultaneously satisfied several dynamic optimization criteria. The model faithfully mimics the three-dimensional rotational kinematics and dynamics observed for human saccades. Our experimental results indicate that while both methods yielded similar results, the nonlinear method is more flexible for future improvements to the model, for which the calculations of the linearized model's position-dependent pretensions and local derivatives become particularly tedious.
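The first, linear-quadratic approach can be sketched on a hypothetical single-axis linearization. The paper's actual model has six cables with position-dependent pretensions; the plant, weights, and time step below are all illustrative assumptions:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Approximate the infinite-horizon discrete LQR gain by iterating
    the Riccati recursion backward until (near) convergence."""
    P = Q.copy()
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical linearized single-axis eye plant: state = [gaze angle, velocity].
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0 - 5.0 * dt]])  # viscously damped velocity
B = np.array([[0.0],
              [dt]])                   # torque input drives velocity
Q = np.diag([100.0, 1.0])              # accuracy vs. movement-duration penalty
R = np.array([[0.01]])                 # energy-expenditure penalty
K = lqr_gain(A, B, Q, R)

# A saccade-like step: regulate a 0.3 rad gaze error back to zero.
x = np.array([[0.3], [0.0]])
for _ in range(1000):
    x = (A - B @ K) @ x
```

The weights in Q and R play the roles of the accuracy and energy terms in the paper's cost function; the full six-cable problem additionally has to keep every cable tension positive, which this scalar-input sketch omits.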